On probabilistic inference in relational conditional logics
Abstract
The principle of maximum entropy has proven to be a powerful approach to commonsense reasoning in probabilistic conditional logics over propositional languages. Following this principle, reasoning is performed with respect to the unique model of a knowledge base that has maximum entropy. This kind of model-based inference satisfies many desirable properties of inductive inference mechanisms and is usually the best choice for reasoning from an information-theoretic point of view. However, the expressive power of propositional formalisms for probabilistic reasoning is limited, and in the past few years many proposals have been made for probabilistic reasoning in relational settings. It seems to be a common view that, in order to interpret probabilistic first-order sentences, one must either adopt a statistical approach that counts (tuples of) individuals, or ground the knowledge base so that a possible-worlds semantics becomes applicable for a subjective interpretation of probabilities. Most proposals of the second type rely on extensions of traditional probabilistic models such as Bayesian networks or Markov networks, whereas there are only a few works on first-order extensions of probabilistic conditional logic. Here, we lift maximum entropy methods to the relational case by employing a relational version of probabilistic conditional logic. First, we propose two different semantics and model theories for interpreting first-order probabilistic conditional logic. We address the problems of ambiguity raised by the difference between subjective and statistical views, and we develop a comprehensive list of desirable properties for inductive model-based probabilistic inference in relational frameworks. Finally, by applying the principle of maximum entropy in the two semantic frameworks, we obtain inference operators that fulfill these properties and turn out to be reasonable choices for reasoning in first-order probabilistic conditional logic.
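To make the setting concrete, here is a minimal sketch of propositional probabilistic conditional logic and its maximum-entropy model, i.e., the machinery the paper lifts to the relational case; the notation $(B|A)[x]$ for conditionals and $\Omega$ for the set of possible worlds follows common usage in this literature and is not quoted from the abstract itself:

\[
\mathcal{R} = \{ (B_1|A_1)[x_1], \ldots, (B_n|A_n)[x_n] \},
\qquad
P \models (B|A)[x] \;\Longleftrightarrow\; P(A) > 0 \ \text{and}\ \frac{P(A \wedge B)}{P(A)} = x,
\]
\[
\mathrm{ME}(\mathcal{R}) \;=\; \operatorname*{arg\,max}_{P \,\models\, \mathcal{R}}\; H(P),
\qquad
H(P) \;=\; -\sum_{\omega \in \Omega} P(\omega) \log P(\omega).
\]

A query $(B|A)$ is then answered by the single value $\mathrm{ME}(\mathcal{R})(B|A)$, which is what makes this inference model-based and unique.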
Similar resources
Scalable Statistical Relational Learning for NLP
Prerequisites: No prior knowledge of statistical relational learning is required. Abstract: Statistical Relational Learning (SRL) is an interdisciplinary research area that combines first-order logic and machine learning methods for probabilistic inference. Although many Natural Language Processing (NLP) tasks (including text classification, semantic parsing, information extraction, coreferenc...
Representing Statistical Information and Degrees of Belief in First-Order Probabilistic Conditional Logic
Employing maximum entropy methods on probabilistic conditional logic has proven to be a useful approach for commonsense reasoning. Yet, the expressive power of this logic and similar formalisms is limited due to their foundations on propositional logic, and in the past few years many proposals have been made for probabilistic reasoning in relational settings. Most of these proposals rely on ...
A Software System for the Computation, Visualization, and Comparison of Conditional Structures for Relational Probabilistic Knowledge Bases
Combining logic with probabilities is a core idea in uncertain reasoning. Recently, approaches to probabilistic conditional logics based on first-order languages have been proposed that employ the principle of maximum entropy (ME), e.g. the logic FO-PCL. In order to simplify the ME model computation, FO-PCL knowledge bases can be transformed so that they become parametrically uniform. On the ot...
On Prototypical Indifference and Lifted Inference in Relational Probabilistic Conditional Logic
Semantics for formal models of probabilistic reasoning rely on probability functions that are defined on the interpretations of the underlying classical logic. When this underlying logic is of a relational nature, i.e., a fragment of first-order logic, the space needed to represent these probability functions explicitly is exponential in both the number of predicates and the number of do...
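As a quick illustration of the blow-up this snippet refers to (the numbers below are an illustrative example, not taken from the paper): with $k$ predicates of arity at most $r$ over a domain of size $n$ there are at most $k \cdot n^{r}$ ground atoms, so the number of possible worlds is up to

\[
|\Omega| \;=\; 2^{\,k \cdot n^{r}},
\]

and already $k = 3$ unary predicates ($r = 1$) over $n = 10$ individuals give $2^{30} \approx 10^{9}$ worlds over which an explicit probability function would have to be stored.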
Learning to Reason with a Scalable Probabilistic Logic
Learning to reason and understand the world’s knowledge is a fundamental problem in Artificial Intelligence (AI). Traditional symbolic AI methods were popular in the 1980s, when first-order logic rules were mostly handwritten, and reasoning algorithms were built on top of them. In the 90s, more and more researchers became interested in statistical methods that deal with the uncertainty of the d...
Journal: Logic Journal of the IGPL
Volume: 20
Pages: -
Published: 2012